
    STUDY OF HAND GESTURE RECOGNITION AND CLASSIFICATION

    The objective is to recognize different hand gestures and achieve efficient classification of the static and dynamic hand movements used for communication. Static and dynamic hand movements are first captured using gesture recognition devices including the Kinect, hand movement sensors, connecting electrodes, and accelerometers. These gestures are processed using hand gesture recognition algorithms such as multivariate fuzzy decision trees, hidden Markov models (HMMs), dynamic time warping, latent regression forests, support vector machines, and surface electromyography. Movements made with one or both hands are captured by gesture capture devices under proper illumination conditions. The captured gestures are processed for occlusions and close finger interactions in order to identify the correct gesture, classify it, and ignore intermittent gestures. Real-time hand gesture recognition needs robust algorithms, such as HMMs, to detect only the intended gesture. Classified gestures are then evaluated for effectiveness by training and testing on standard datasets such as sign language alphabets and the KTH dataset. Hand gesture recognition plays an important role in applications such as sign language recognition, robotics, television control, rehabilitation, and music orchestration.
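
    Of the algorithms listed above, dynamic time warping is the simplest to illustrate. Below is a minimal sketch, not the study's implementation: it assumes each gesture arrives as an array of captured joint coordinates and classifies a query by its nearest training template.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two gesture trajectories,
    each an (n, d) array of captured joint coordinates."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, or match against the prefix.
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def classify(query, templates):
    """Return the label of the nearest training gesture, where
    templates is a list of (label, trajectory) pairs."""
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]
```

    In practice a rejection threshold on the best distance would also be needed so that intermittent, unintended gestures are ignored rather than forced into the nearest class.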

    Autonomous Systems: Indoor Drone Navigation

    Drones are a promising technology for autonomous data collection and indoor sensing. In situations where human-controlled UAVs may not be practical or dependable, such as in uncharted or dangerous locations, autonomous UAVs offer flexibility, cost savings, and reduced risk. The system creates a simulated quadcopter capable of navigating autonomously in an indoor environment using the Gazebo simulation tool and the ROS navigation framework known as Navigation2 (Nav2). While Nav2 has successfully demonstrated autonomous navigation for terrestrial robots and vehicles, the same has not yet been accomplished for unmanned aerial vehicles. The goal is to use the SLAM Toolbox for ROS and the Nav2 navigation framework to construct a simulated drone that can move autonomously in an indoor (GPS-denied) environment.
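
    As a concrete illustration, a ROS 2 launch file along the following lines could bring up SLAM Toolbox and a subset of the Nav2 servers for the simulated quadcopter. This is a hedged sketch: the parameter values and the choice of which Nav2 servers to launch are assumptions, not the project's actual configuration.

```python
# Hypothetical ROS 2 launch file for indoor, GPS-denied navigation.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    use_sim_time = {'use_sim_time': True}  # Gazebo provides /clock
    return LaunchDescription([
        # Online SLAM to build the indoor map without GPS.
        Node(package='slam_toolbox', executable='async_slam_toolbox_node',
             parameters=[use_sim_time]),
        # Core Nav2 servers (global planning and path following).
        Node(package='nav2_planner', executable='planner_server',
             parameters=[use_sim_time]),
        Node(package='nav2_controller', executable='controller_server',
             parameters=[use_sim_time]),
        # Lifecycle manager activates the Nav2 nodes on startup.
        Node(package='nav2_lifecycle_manager', executable='lifecycle_manager',
             name='lifecycle_manager_navigation',
             parameters=[use_sim_time,
                         {'autostart': True,
                          'node_names': ['planner_server',
                                         'controller_server']}]),
    ])
```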

    Cashew dataset generation using augmentation and RaLSGAN and a transfer learning based tinyML approach towards disease detection

    Cashew is one of the most extensively consumed nuts in the world, and it is also known as a cash crop. A tree may generate a substantial yield within a few months and has a lifetime of around 70 to 80 years. Yet, in addition to these benefits, there are certain constraints on its cultivation. Aside from parasites and algae, anthracnose is the most common disease affecting the trees. The dense structure of the cashew tree makes the disease harder to diagnose than in short crops. Hence, we present a dataset that exclusively consists of healthy and diseased cashew leaves and fruits. The dataset is augmented by applying an RGB color transformation to highlight diseased regions, photometric and geometric augmentations, and RaLSGAN to enlarge the initial collection of images and boost performance in real-time situations when working with a constrained dataset. Further, transfer learning is used to test the classification efficiency of the dataset using architectures such as MobileNet and Inception. TensorFlow Lite is used to deploy these models for real-time disease diagnosis with drones. Several post-training optimization strategies are applied, and their memory footprints are compared. They have proven effective, delivering high accuracy (up to 99%) and reductions in memory and latency, making them ideal for applications with limited resources.
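
    The post-training optimization step is the most transferable part of this pipeline. The sketch below shows default post-training quantization with the public TensorFlow Lite API; the four-class MobileNetV2 head and the file name are illustrative assumptions, not the paper's exact setup.

```python
import tensorflow as tf

# Assumed stand-in for the fine-tuned model: healthy/diseased
# leaf and fruit classes (4 in total, hypothetically).
model = tf.keras.applications.MobileNetV2(weights=None, classes=4)

# Default post-training optimization quantizes weights, shrinking
# the model for resource-constrained, on-drone inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('cashew_disease.tflite', 'wb') as f:
    f.write(tflite_model)
print(f'Converted model size: {len(tflite_model) / 1024:.1f} KiB')
```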

    Comparative study on Judgment Text Classification for Transformer Based Models

    This work involves the use of various NLP models to predict the winner of a particular judgment by means of text extraction and summarization from a judgment document. These documents are useful in legal proceedings. One advantage is that they can be used for citations and precedent references in lawsuits and cases, strengthening the argument of the party citing them. When it comes to precedent, it is necessary to refer to an ample number of documents in order to collect legal points relevant to the case. However, reviewing these documents takes a long time due to their complex wording and size. This work presents a comparative study of 6 different self-attention-based transformer models and how they perform when configured with 4 different activation functions. The models are trained on 200 judgment contexts, and their results are evaluated against different benchmark parameters. The models ultimately reach a confidence level of up to 99% when predicting the judgment. This can be used to retrieve a particular judgment document without spending too much time searching for relevant cases and reading them in full.
    Comment: 28 pages, 9 figures
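
    For Hugging Face-style transformer classifiers, varying the activation function is a one-line configuration change, which suggests how such a comparison might be run; the sketch below is an assumed setup, with bert-base-uncased standing in for the six models compared.

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

# hidden_act selects the feed-forward activation in BERT-style configs.
for act in ['gelu', 'relu', 'silu', 'gelu_new']:
    config = AutoConfig.from_pretrained('bert-base-uncased',
                                        num_labels=2,   # two possible winners
                                        hidden_act=act)
    model = AutoModelForSequenceClassification.from_config(config)
    # ...fine-tune on the 200 judgment contexts and record benchmarks.
```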

    Cultivating Insight: Detecting Autism Spectrum Disorder through Residual Attention Network in Facial Image Analysis

    Revolutionizing Autism Spectrum Disorder Identification through Deep Learning: Unveiling Facial Activation Patterns. In this study, our primary objective is to harness deep learning algorithms for the precise identification of individuals with autism spectrum disorder (ASD) solely from facial image datasets. Our investigation centers on the use of face activation patterns, aiming to uncover novel insights into the distinctive facial features of ASD patients. To accomplish this, we examined facial imaging data from a global and multidisciplinary repository known as the Autism Face Imaging Data Exchange. Autism spectrum disorder is characterized by inherent social deficits and manifests in a spectrum of diverse symptomatic scenarios. Recent data from the Centers for Disease Control (CDC) underscores the significance of this disorder, indicating that approximately 1 in 54 children is impacted by ASD, according to estimates from the CDC's Autism and Developmental Disabilities Monitoring Network (ADDM). Our research examined the functional connectivity patterns that objectively distinguish ASD participants, focusing on their facial imaging data. Through this investigation, we aimed to uncover the latent facial patterns that play a pivotal role in the classification of ASD cases. Our approach introduces a novel module that enhances the discriminative potential of standard convolutional neural networks (CNNs), such as ResNet-50, thus advancing the state of the art. Our model achieved an accuracy of 99% in distinguishing between ASD patients and control subjects within the dataset. Our findings illuminate the specific facial expression domains that contribute most significantly to the differentiation of ASD cases from typically developing individuals, as inferred from our deep learning methodology. To validate our approach, we conducted real-time video testing on a diverse group of children, achieving an accuracy of 99.90% and an F1 score of 99.67%. Through this work, we offer a cutting-edge approach to ASD identification and contribute to the understanding of the underlying facial activation patterns that hold potential for transforming the diagnostic landscape of autism spectrum disorder.
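
    The abstract does not specify the "novel module", but a residual attention block in the style of Wang et al.'s Residual Attention Network is one plausible reading of the title. The PyTorch sketch below illustrates that pattern under stated assumptions (channel sizes, reduction ratio, and placement inside ResNet-50 are all assumed).

```python
import torch.nn as nn

class ResidualAttention(nn.Module):
    """Illustrative residual attention block: a small mask branch
    produces per-location weights that gate the trunk features, with
    a residual term so attention can amplify but never zero them out."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels // 8, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, channels, kernel_size=1),
            nn.Sigmoid(),                      # weights in (0, 1)
        )

    def forward(self, x):
        return x * (1.0 + self.mask(x))        # (1 + M(x)) * F(x)
```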

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
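
    A concrete instance of the flux-versus-number distinction is the LC3-II turnover assay: comparing LC3-II levels with and without a lysosomal inhibitor isolates the amount actually degraded. The numbers below are hypothetical densitometry readings, included only to make the arithmetic explicit.

```python
# Hypothetical LC3-II band intensities, normalized to a loading control.
lc3ii_untreated = 1.0   # steady state, no lysosomal inhibitor
lc3ii_inhibited = 2.6   # with inhibitor (e.g., bafilomycin A1)

# Net flux: the LC3-II that would otherwise have been degraded in
# lysosomes. Near-equal values would indicate accumulation from a
# trafficking block, not increased autophagy.
flux = lc3ii_inhibited - lc3ii_untreated
print(f'autophagic flux ~ {flux:.1f} (arbitrary units)')
```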

    Clonage gestuel expressif (Expressive Gesture Cloning)

    Virtual environments allow human beings to be represented by virtual humans or avatars. Users can share a sense of virtual presence if the avatar looks like the real human it represents. This classically involves turning the avatar into a clone with the real human's appearance and voice. However, the possibility of cloning the gesture expressivity of a real person has received little attention so far. Gesture expressivity combines the style and mood of a person. Expressivity parameters have been defined in earlier works for animating embodied conversational agents. In this work, we focus on expressivity in wrist motion. First, we propose algorithms to estimate three expressivity parameters from captured 3D wrist trajectories: repetition, spatial extent, and temporal extent. Then, we conducted a perceptual study, through a user survey, of the relevance of expressivity for recognizing individual humans. We animated a virtual agent using the expressivity estimated from individual humans, and users were asked whether they could recognize the individual human behind each animation. We found that, when gestures are repeated in the animation, users perceive this as a discriminative feature for recognizing humans, while the absence of repetition is matched with any human, regardless of whether they repeat gestures or not. More importantly, we found that 75% or more of users could recognize the real human (out of two proposed) from an animated virtual avatar based only on the spatial and temporal extents. Consequently, gesture expressivity is a relevant cue for cloning. It can be used as another element in the development of a virtual clone that represents a person.
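
    As an illustration of the estimation step, the sketch below computes two of the three parameters from a captured wrist trajectory. The definitions used (bounding-box diagonal for spatial extent, duration for temporal extent) are plausible readings of the terms, not the thesis's exact formulas.

```python
import numpy as np

def expressivity(traj, fps=100.0):
    """Estimate spatial and temporal extent from a wrist trajectory
    given as an (n, 3) array of captured 3D positions."""
    # Spatial extent: size of the volume swept by the wrist,
    # taken here as the diagonal of the trajectory's bounding box.
    spatial_extent = np.linalg.norm(traj.max(axis=0) - traj.min(axis=0))
    # Temporal extent: gesture duration in seconds at the capture rate.
    temporal_extent = len(traj) / fps
    return spatial_extent, temporal_extent
```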

    Animating a conversational agent with user expressivity

    Our objective is to animate an embodied conversational agent (ECA) with communicative gestures rendered with the expressivity of the real human user it represents. We describe an approach to estimate a subset of expressivity parameters defined in the literature (namely spatial and temporal extent) from captured motion trajectories. We first validate this estimation against synthesized motion and then show results with real human motion. The estimated expressivity is then sent to the animation engine of an ECA, which becomes a personalized autonomous representative of that user.
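
    ECA expressivity parameters in this literature are typically normalized (e.g., to a [-1, 1] range in Hartmann et al.'s model), so raw estimates need calibration before being sent to the animation engine. The helper below is hypothetical; the calibration bounds would come from a set of captured gestures.

```python
def to_eca_range(value, lo, hi):
    """Map a raw expressivity estimate (e.g., spatial extent in meters)
    into the normalized [-1, 1] range assumed for the animation engine."""
    value = min(max(value, lo), hi)             # clamp to calibration bounds
    return 2.0 * (value - lo) / (hi - lo) - 1.0

# e.g., spc = to_eca_range(spatial_extent, lo=0.1, hi=1.2)
```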

    Proper Orthogonal Decomposition Technique for Near-Optimal Control of Flexible Aircraft Wings

    The aeroelastic study of flight vehicles has long been a subject of great interest and research. Its importance lies in achieving better performance and safe operation (e.g., avoiding aileron reversal and flutter) and in related analyses in the field of aeronautics. The structural dynamics of an aircraft wing, characterized by its aeroelastic nature, is modeled as a partial differential equation. The study of such equations falls under distributed parameter systems, and control design for these systems is considerably more complex than for lumped parameter systems described by ordinary differential equations. In this paper, we present a stabilizing state-feedback control design approach for the second-order system dynamics that completely represents the heave dynamics of a wing-fuselage model. The approach is presented for a class of systems with a continuous actuator in the spatial domain. The control methodology is designed by combining proper orthogonal decomposition (POD) with approximate dynamic programming (ADP).
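
    The POD step itself reduces to a singular value decomposition of a snapshot matrix of the simulated wing response; the reduced coordinates then give ADP a low-dimensional state to work with. The sketch below uses placeholder data and an assumed grid size, purely to show the mechanics.

```python
import numpy as np

# Snapshot matrix: one column per time instant, one row per spatial
# grid point of the heave displacement field (placeholder random data).
n_space, n_time, r = 200, 500, 6
snapshots = np.random.randn(n_space, n_time)

# POD modes are the left singular vectors; keep the r most energetic.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
modes = U[:, :r]
energy = np.cumsum(s**2) / np.sum(s**2)
print(f'{r} modes capture {100 * energy[r - 1]:.1f}% of snapshot energy')

# Project the full state onto the POD basis for the reduced-order model.
reduced = modes.T @ snapshots                  # shape (r, n_time)
```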